Recommendation Systems
Dual Collaborative LLMs via Continual Fine-Tuning for Serendipitous Recommendation
Lin, Hongxiang, Guo, Hao, Li, Zeshun, Xue, Erpeng, He, Yongqian, Hou, Xiangyu, Hu, Zhaoyu, Wang, Lei, Chen, Sheng
Traditional recommendation systems tend to trap users in strong feedback loops by excessively pushing content aligned with their historical preferences, thereby limiting exploration and causing content fatigue. Although large language models (LLMs) show promise through their diverse content generation capabilities, existing LLM-enhanced dual-model frameworks face two major limitations: first, they overlook long-term preferences driven by group identity, leading to biased interest modeling; second, they suffer from static optimization, as a one-time alignment process fails to leverage incremental user data for closed-loop optimization. To address these challenges, we propose the Co-Evolutionary Alignment (CoEA) method. For interest-modeling bias, we introduce a Dual-Stable Interest Exploration (DSIE) module that jointly models long-term group identity and short-term individual interests through parallel processing of behavioral sequences. For static optimization, we design a Periodic Collaborative Optimization (PCO) mechanism that regularly conducts preference verification on incremental data using the Relevance LLM, guides the Novelty LLM to fine-tune based on the verification results, and then feeds the output of the continually fine-tuned Novelty LLM back to the Relevance LLM for re-evaluation, achieving dynamic closed-loop optimization. Extensive online and offline experiments verify the effectiveness of CoEA for serendipitous recommendation.
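The four-step PCO loop described in the abstract (verify, fine-tune, generate, re-evaluate) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names, the toy verification rule, and the overlap-based scoring are all hypothetical stand-ins for the two LLMs.

```python
# Minimal sketch of the Periodic Collaborative Optimization (PCO) loop.
# Both "LLMs" are stand-in objects; all names and scoring rules are illustrative.

class RelevanceLLM:
    def verify(self, interactions):
        # Preference verification on incremental data:
        # keep only items the user actually engaged with (label == 1).
        return [item for item, label in interactions if label == 1]

    def score(self, candidates, verified):
        # Toy relevance score: overlap with verified preferences.
        return {c: (1.0 if c in verified else 0.2) for c in candidates}

class NoveltyLLM:
    def __init__(self, catalog):
        self.catalog = list(catalog)
        self.boosted = set()

    def fine_tune(self, verified):
        # Continual fine-tuning stand-in: absorb verified interests.
        self.boosted |= set(verified)

    def generate(self, k=3):
        # Propose items outside the already-confirmed interests (novelty).
        fresh = [c for c in self.catalog if c not in self.boosted]
        return fresh[:k]

def pco_round(relevance, novelty, incremental_data):
    verified = relevance.verify(incremental_data)    # 1. preference verification
    novelty.fine_tune(verified)                      # 2. guided fine-tuning
    candidates = novelty.generate()                  # 3. novel candidate generation
    return relevance.score(candidates, verified)     # 4. re-evaluation closes the loop
```

Running `pco_round` periodically on each new batch of incremental data is what makes the optimization closed-loop rather than a one-time alignment.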
GENIUS: An Agentic AI Framework for Autonomous Design and Execution of Simulation Protocols
Soleymanibrojeni, Mohammad, Aydin, Roland, Guedes-Sobrinho, Diego, Dias, Alexandre C., Piotrowski, Maurício J., Wenzel, Wolfgang, Rêgo, Celso Ricardo Caldeira
Computational simulations have revolutionized materials design, accelerating innovation by allowing researchers to explore material properties and behaviors virtually before experimental validation[1-4]. This shift has led to significant breakthroughs ranging from energy storage[5, 6] to pharmaceutical development[7, 8]. However, a persistent challenge undermines this potential: the technical barriers to effective simulation setup disproportionately burden researchers, particularly those whose expertise lies in experimental rather than computational domains. When scientists identify a promising new compound, understanding its fundamental properties often requires computational validation. Yet even seemingly straightforward simulations frequently lead to lengthy technical challenges. Even experienced computational scientists (physicists, chemists, engineers) find themselves diverted from scientific inquiry toward navigating complex programming challenges, engaging in trial-and-error attempts, and struggling with computational setup details rather than focusing on the scientific questions[9]. Integrated Computational Materials Engineering (ICME) has emerged as a robust framework for accelerating materials development by synergizing experimental data, simulations, and theoretical models across multiple scales.
Quantifying the Potential to Escape Filter Bubbles: A Behavior-Aware Measure via Contrastive Simulation
Feng, Difu, Xu, Qianqian, Wang, Zitai, Hua, Cong, Yang, Zhiyong, Huang, Qingming
Nowadays, recommendation systems have become crucial to online platforms, shaping user exposure through accurate preference modeling. However, such an exposure strategy can also reinforce users' existing preferences, leading to the notorious phenomenon of filter bubbles. Given their negative effects, such as group polarization, increasing attention has been paid to finding reasonable measures of filter bubbles. However, most existing evaluation metrics simply measure the diversity of user exposure, failing to distinguish algorithmic preference modeling from actual information confinement. In view of this, we introduce Bubble Escape Potential (BEP), a behavior-aware measure that quantifies how easily users can escape filter bubbles. Specifically, BEP leverages a contrastive simulation framework that assigns different behavioral tendencies (e.g., positive vs. negative) to synthetic users and compares the induced exposure patterns. This design decouples the effect of filter bubbles from preference modeling, allowing a more precise diagnosis of bubble severity. We conduct extensive experiments across multiple recommendation models to examine the relationship between predictive accuracy and bubble escape potential across different groups. To the best of our knowledge, our empirical results are the first to quantitatively validate the dilemma between preference modeling and filter bubbles. Moreover, we observe the counter-intuitive phenomenon that mild random recommendations are ineffective in alleviating filter bubbles, an observation that offers a principled foundation for further work in this direction.
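The contrastive-simulation idea can be sketched concretely: roll out the same recommender against two synthetic users with opposite behavioral tendencies and contrast the exposure they receive. Everything below is a hypothetical toy, not the paper's BEP formula: the recommender, the accept/reject tendencies, and the diversity-ratio contrast are illustrative assumptions.

```python
import random

def simulate_exposure(recommender, tendency, items, steps=50, seed=0):
    """Roll out a synthetic user whose feedback follows a fixed tendency (+1 accept / -1 reject)."""
    rng = random.Random(seed)
    history, exposed = [], []
    for _ in range(steps):
        item = recommender(history, items, rng)
        exposed.append(item)
        history.append((item, 1 if tendency > 0 else 0))
    return exposed

def toy_recommender(history, items, rng):
    """Illustrative recommender: mostly exploit liked items, occasionally explore."""
    liked = [i for i, fb in history if fb == 1]
    if liked and rng.random() < 0.8:
        return rng.choice(liked)   # exploit modeled preference
    return rng.choice(items)       # explore

def bubble_escape_potential(items, steps=50):
    """Toy contrast: exposure diversity of an always-rejecting user vs. an always-accepting one."""
    pos = simulate_exposure(toy_recommender, +1, items, steps)
    neg = simulate_exposure(toy_recommender, -1, items, steps)
    return len(set(neg)) / len(set(pos))
```

A ratio above 1 here means a user who pushes back against recommendations sees a strictly wider slice of the catalog than a compliant one, i.e., the confinement is driven by preference exploitation rather than the catalog itself.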
Entropy-Based Measurement of Value Drift and Alignment Work in Large Language Models
Large language model safety is usually assessed with static benchmarks, but key failures are dynamic: value drift under distribution shift, jailbreak attacks, and slow degradation of alignment in deployment. Building on a recent Second Law of Intelligence that treats ethical entropy as a state variable which tends to increase unless countered by alignment work, we make this framework operational for large language models. We define a five-way behavioral taxonomy, train a classifier to estimate ethical entropy S(t) from model transcripts, and measure entropy dynamics for base and instruction-tuned variants of four frontier models across stress tests. Base models show sustained entropy growth, while tuned variants suppress drift and reduce ethical entropy by roughly eighty percent. From these trajectories we estimate an effective alignment work rate gamma_eff and embed S(t) and gamma_eff in a monitoring pipeline that raises alerts when entropy drift exceeds a stability threshold, enabling run-time oversight of value drift.
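The entropy-monitoring pipeline can be sketched with Shannon entropy over classifier outputs. Note the assumptions: the abstract mentions a five-way behavioral taxonomy but does not name the classes, so the labels below are invented placeholders, and the simple first-difference drift alert is an illustrative stand-in for the paper's stability threshold.

```python
import math
from collections import Counter

# Hypothetical class labels; the paper's actual five-way taxonomy is not given in the abstract.
TAXONOMY = ["aligned", "evasive", "sycophantic", "deceptive", "harmful"]

def ethical_entropy(labels):
    """Shannon entropy (bits) of the behavior-class distribution in a transcript window."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def drift_alert(entropy_series, threshold=0.1):
    """Flag time steps where the entropy drift S(t) - S(t-1) exceeds a stability threshold."""
    return [t for t in range(1, len(entropy_series))
            if entropy_series[t] - entropy_series[t - 1] > threshold]
```

A run with uniformly "aligned" behavior gives S = 0, a fifty-fifty split between two classes gives S = 1 bit, and the alert list marks the windows where run-time oversight would fire.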
Membership Inference Attack against Large Language Model-based Recommendation Systems: A New Distillation-based Paradigm
Li, Cuihong, Huang, Xiaowen, Yin, Chuanhuan, Sang, Jitao
Membership Inference Attack (MIA) aims to determine whether a specific data sample was included in the training dataset of a target model. Traditional MIA approaches rely on shadow models to mimic target-model behavior, but their effectiveness diminishes for Large Language Model (LLM)-based recommendation systems due to the scale and complexity of the training data. This paper introduces a novel knowledge distillation-based MIA paradigm tailored to LLM-based recommendation systems. Our method constructs a reference model via distillation, applying distinct strategies to member and non-member data to enhance discriminative capability. The paradigm extracts fused features (e.g., confidence, entropy, loss, and hidden-layer vectors) from the reference model to train an attack model, overcoming the limitations of individual features. Extensive experiments on extended datasets (Last.FM, MovieLens, Book-Crossing, Delicious) and diverse LLMs (T5, GPT-2, LLaMA3) demonstrate that our approach significantly outperforms shadow-model-based MIAs and individual-feature baselines, showing its practicality for privacy attacks on LLM-driven recommender systems.
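The fused-feature attack can be sketched in two stages: extract scalar features (confidence, entropy, loss) from the reference model's output distribution, then train a binary attack classifier on them. This is a minimal stdlib sketch under stated assumptions: it omits the hidden-layer vectors the paper also fuses, and the tiny logistic-regression trainer stands in for whatever attack model the authors use.

```python
import math

def fused_features(probs, target_idx):
    """Confidence, predictive entropy (nats), and cross-entropy loss for one prediction."""
    confidence = max(probs)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    loss = -math.log(max(probs[target_idx], 1e-12))
    return [confidence, entropy, loss]

def train_attack_model(member_feats, nonmember_feats, lr=0.1, epochs=200):
    """Tiny logistic-regression attack model over fused features (illustrative stand-in)."""
    data = [(f, 1.0) for f in member_feats] + [(f, 0.0) for f in nonmember_feats]
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                       # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    def predict(x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))   # membership probability
    return predict
```

The intuition being exploited is the usual one for MIAs: members tend to get confident, low-loss predictions from a model distilled with member-favoring strategies, while non-members get flatter, higher-loss distributions.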
E2E-GRec: An End-to-End Joint Training Framework for Graph Neural Networks and Recommender Systems
Xue, Rui, Zhu, Shichao, Qin, Liang, Pan, Guangmou, Song, Yang, Wu, Tianfu
Graph Neural Networks (GNNs) have emerged as powerful tools for modeling graph-structured data and have been widely used in recommender systems, such as for capturing complex user-item and item-item relations. However, most industrial deployments adopt a two-stage pipeline: GNNs are first pre-trained offline to generate node embeddings, which are then used as static features for downstream recommender systems. This decoupled paradigm leads to two key limitations: (1) high computational overhead, since large-scale GNN inference must be repeatedly executed to refresh embeddings; and (2) lack of joint optimization, as the gradient from the recommender system cannot directly influence the GNN learning process, leaving the GNN suboptimally informative for the recommendation task. In this paper, we propose E2E-GRec, a novel end-to-end training framework that unifies GNN training with the recommender system. Our framework is characterized by three key components: (i) efficient subgraph sampling from a large-scale cross-domain heterogeneous graph to ensure training scalability and efficiency; (ii) a Graph Feature Auto-Encoder (GFAE) serving as an auxiliary self-supervised task that guides the GNN to learn structurally meaningful embeddings; and (iii) a two-level feature fusion mechanism combined with GradNorm-based dynamic loss balancing, which stabilizes graph-aware multi-task end-to-end training. Extensive offline evaluations, online A/B tests (e.g., a +0.133% relative improvement in stay duration and a 0.3171% reduction in the average number of videos a user skips) on large-scale production data, together with theoretical analysis, demonstrate that E2E-GRec consistently surpasses traditional approaches, yielding significant gains across multiple recommendation metrics.
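Component (iii) relies on GradNorm-style dynamic loss balancing, which down-weights tasks whose gradients dominate and up-weights tasks that are training slowly. Below is a simplified, framework-free sketch of one weight update; the paper does not give its exact formulation, so this follows the general GradNorm recipe (Chen et al., 2018) with the weight-update step reduced to a sign rule for clarity.

```python
def gradnorm_step(weights, grad_norms, losses, init_losses, alpha=1.5, lr=0.025):
    """One simplified GradNorm update on per-task loss weights (illustrative sketch)."""
    T = len(weights)
    g = [w * gn for w, gn in zip(weights, grad_norms)]     # weighted gradient norms
    mean_g = sum(g) / T
    # Relative inverse training rate: tasks that improved less get larger targets.
    ratios = [l / l0 for l, l0 in zip(losses, init_losses)]
    mean_r = sum(ratios) / T
    targets = [mean_g * (r / mean_r) ** alpha for r in ratios]
    # Descend on sum_i |g_i - target_i| w.r.t. w_i (targets treated as constants).
    new_w = []
    for w, gn, gi, ti in zip(weights, grad_norms, g, targets):
        grad = gn if gi > ti else -gn
        new_w.append(max(w - lr * grad, 1e-6))
    # Renormalize so the weights keep summing to the number of tasks.
    s = sum(new_w)
    return [T * w / s for w in new_w]
```

Applied each step alongside the recommendation and GFAE losses, this keeps the auxiliary self-supervised gradient from either swamping or vanishing against the main task, which is the stabilization role the abstract assigns it.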
TBGRecall: A Generative Retrieval Model for E-commerce Recommendation Scenarios
Liang, Zida, Wu, Changfa, Huang, Dunxian, Sun, Weiqiang, Wang, Ziyang, Yan, Yuliang, Wu, Jian, Jiang, Yuning, Zheng, Bo, Chen, Ke, Zhou, Silu, Zhang, Yu
Recommendation systems are essential tools in modern e-commerce, facilitating personalized user experiences by suggesting relevant products. Recent advancements in generative models have demonstrated potential for enhancing recommendation systems; however, these models often exhibit limitations in optimizing retrieval tasks, primarily due to their reliance on autoregressive generation. Conventional approaches introduce sequential dependencies that impede efficient retrieval, as they are inherently unsuitable for generating multiple items without positional constraints within a single request session. To address these limitations, we propose TBGRecall, a framework integrating Next Session Prediction (NSP), designed to enhance generative retrieval models for e-commerce applications. Our framework reformulates the input by partitioning samples into multi-session sequences, where each sequence comprises a session token followed by a set of item tokens, and further incorporates multiple optimizations tailored to the generative retrieval task. In terms of training methodology, our pipeline combines limited historical-data pre-training with stochastic partial incremental training, significantly improving training efficiency and underscoring the superiority of data recency over sheer data volume. Extensive experiments on public benchmarks alongside a large-scale industrial dataset from Taobao show that TBGRecall outperforms state-of-the-art recommendation methods and exhibits a clear scaling-law trend. Ultimately, NSP represents a significant advancement in the effectiveness of generative recommendation systems for e-commerce applications.
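The multi-session reformulation lends itself to a small sketch: flatten each user's sessions into a token stream of session markers followed by item tokens, then form training pairs whose target is the entire next session as an unordered set. The special token and helper names below are hypothetical; the paper specifies only the session-token-then-item-tokens layout.

```python
SESSION_TOKEN = "[SES]"  # illustrative special token, not from the paper

def build_nsp_sequence(sessions):
    """Flatten sessions into [SES] item ... [SES] item ... for Next Session Prediction."""
    seq = []
    for items in sessions:
        seq.append(SESSION_TOKEN)
        seq.extend(items)
    return seq

def nsp_training_pairs(sessions):
    """Context = all previous sessions; target = the whole next session as a set,
    reflecting generation of multiple items without positional constraints."""
    pairs = []
    for t in range(1, len(sessions)):
        context = build_nsp_sequence(sessions[:t])
        target = set(sessions[t])
        pairs.append((context, target))
    return pairs
```

Treating the next session as a set rather than an ordered continuation is exactly what removes the autoregressive positional constraint the abstract identifies as the retrieval bottleneck.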